Detection method of domains generated by dictionary-based domain generation algorithm
ZHANG Yongbin, CHANG Wenxin, SUN Lianshan, ZHANG Hang
Journal of Computer Applications    2021, 41 (9): 2609-2614.   DOI: 10.11772/j.issn.1001-9081.2020111837
The composition of domain names generated by dictionary-based Domain Generation Algorithms (DGA) is very similar to that of benign domain names, making them difficult to detect effectively with existing techniques. To solve this problem, a detection model named CL (Convolutional Neural Network (CNN) combined with Long Short-Term Memory (LSTM) network) was proposed. The model consists of three parts: a character embedding layer, a feature extraction layer and a fully connected layer. Firstly, the characters of the input domain name were encoded by the character embedding layer. Then, the features of the domain name were extracted by the feature extraction layer, which connects a CNN and an LSTM in series: the n-gram features of the domain name were extracted by the CNN, and the results were fed into the LSTM to learn the context between n-grams; meanwhile, different combinations of CNNs and LSTMs were used to learn the features of n-grams of different lengths. Finally, the dictionary-based DGA domain names were classified and predicted by the fully connected layer according to the extracted features. Experimental results show that the proposed model achieves the best performance when the convolution kernel sizes of the CNNs are set to 3 and 4. In experiments on four dictionary-based DGA families, the accuracy of the CL model is 2.20% higher than that of the CNN model, and the CL model shows better stability as the number of sample families increases.
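As a rough illustration, the following PyTorch sketch wires the three parts together in the way the abstract describes; the vocabulary size, channel widths and hidden sizes are assumptions, not the authors' published hyperparameters.
```python
# Minimal sketch of the CL (CNN + LSTM) domain-name classifier described above.
import torch
import torch.nn as nn

class CLModel(nn.Module):
    def __init__(self, vocab_size=40, embed_dim=32, conv_channels=64,
                 lstm_hidden=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Two CNN->LSTM branches extract n-grams of different lengths
        # (kernel sizes 3 and 4 gave the best results in the paper).
        self.branches = nn.ModuleList()
        for k in (3, 4):
            self.branches.append(nn.ModuleDict({
                "conv": nn.Conv1d(embed_dim, conv_channels, kernel_size=k, padding=k // 2),
                "lstm": nn.LSTM(conv_channels, lstm_hidden, batch_first=True),
            }))
        self.fc = nn.Linear(lstm_hidden * 2, num_classes)

    def forward(self, x):                      # x: (batch, seq_len) character ids
        e = self.embed(x).transpose(1, 2)      # (batch, embed_dim, seq_len)
        feats = []
        for branch in self.branches:
            c = torch.relu(branch["conv"](e))                  # n-gram features
            _, (h, _) = branch["lstm"](c.transpose(1, 2))      # context between n-grams
            feats.append(h[-1])                                # last hidden state
        return self.fc(torch.cat(feats, dim=1))

logits = CLModel()(torch.randint(1, 40, (8, 63)))  # 8 domains, max length 63
```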
Image super-resolution reconstruction algorithm based on Laplacian pyramid generative adversarial network
DUAN Youxiang, ZHANG Hanxiao, SUN Qifeng, SUN Youkai
Journal of Computer Applications    2021, 41 (4): 1020-1026.   DOI: 10.11772/j.issn.1001-9081.2020081299
Concerning the poor reconstruction performance of current image super-resolution algorithms at large scale factors and their need for separate training at each scale, an image super-resolution reconstruction algorithm based on a Laplacian pyramid Generative Adversarial Network (GAN) was proposed. The pyramid-structured generator of the proposed algorithm was used to realize multi-scale image reconstruction, reducing the difficulty of learning large scale factors through progressive up-sampling, and dense connections were used between layers to enhance feature propagation, which effectively avoided the vanishing gradient problem. A Markovian discriminator was used to map the input data into a result matrix and guide the generator to focus on local features of the image during training, which enriched the details of the reconstructed images. Experimental results show that, for 2-times, 4-times and 8-times reconstruction on Set5 and other benchmark datasets, the average Peak Signal-to-Noise Ratio (PSNR) of the proposed algorithm reaches 33.97 dB, 29.15 dB and 25.43 dB respectively, and the average Structural SIMilarity (SSIM) reaches 0.924, 0.840 and 0.667 respectively, outperforming algorithms such as the Super-Resolution Convolutional Neural Network (SRCNN), the deep Laplacian pyramid Super-Resolution Network (LapSRN) and the Super-Resolution GAN (SRGAN); the images reconstructed by the proposed algorithm also retain more vivid textures and fine-grained details under subjective visual comparison.
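A minimal PyTorch sketch of how such a pyramid generator can be organized, with dense connections inside each level and a 2x up-sampling step per level; all channel counts and layer depths here are illustrative assumptions, not the paper's architecture in detail.
```python
# Pyramid generator skeleton: each level densely extracts features and
# upsamples 2x, so 2x/4x/8x outputs come from one progressive network.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, channels=64, growth=32, layers=4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, 3, padding=1)
            for i in range(layers))
        self.fuse = nn.Conv2d(channels + layers * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:  # dense connections: every layer sees all earlier maps
            feats.append(torch.relu(conv(torch.cat(feats, 1))))
        return self.fuse(torch.cat(feats, 1))

class PyramidGenerator(nn.Module):
    def __init__(self, levels=3, channels=64):        # 3 levels -> 2x, 4x, 8x
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.levels = nn.ModuleList(DenseBlock(channels) for _ in range(levels))
        self.up = nn.ModuleList(
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1)
            for _ in range(levels))
        self.to_img = nn.ModuleList(
            nn.Conv2d(channels, 3, 3, padding=1) for _ in range(levels))

    def forward(self, lr):
        x, outputs = self.head(lr), []
        for block, up, to_img in zip(self.levels, self.up, self.to_img):
            x = up(block(x))                          # progressive 2x up-sampling
            outputs.append(to_img(x))                 # reconstruction at each scale
        return outputs                                # [2x, 4x, 8x] images

outs = PyramidGenerator()(torch.randn(1, 3, 32, 32))
```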
Erasure code with low recovery-overhead in distributed storage systems
ZHANG Hang, LIU Shanzheng, TANG Dan, CAI Hongliang
Journal of Computer Applications    2020, 40 (10): 2942-2950.   DOI: 10.11772/j.issn.1001-9081.2020010127
Erasure code technology is a typical data fault-tolerance method in distributed storage systems; compared with multi-copy technology, it provides high data reliability with low storage overhead. However, high repair cost limits the practical application of erasure codes. Aiming at the high repair cost, complex encoding and poor flexibility of existing erasure codes, a simply encoded erasure code with low repair cost, called Rotation Group Repairable Code (RGRC), was proposed. In RGRC, multiple strips were first combined into a strip set; the association relationship between the strips was then used to hierarchically rotate and encode the data blocks in the strip set to obtain the corresponding redundant blocks. RGRC greatly reduced the amount of data to be read and transmitted during single-node repair, saving considerable network bandwidth, while still retaining high fault tolerance. Moreover, to meet the varying needs of distributed storage systems, RGRC can flexibly trade off storage overhead against repair cost. Comparison experiments on a distributed storage system show that, compared with RS (Reed-Solomon) codes, LRC (Locally Repairable Codes), basic-Pyramid, DLRC (Dynamic Local Reconstruction Codes), pLRC (proactive Locally Repairable Codes), GRC (Group Repairable Codes) and UFP-LRC (Unequal Failure Protection based Local Reconstruction Codes), RGRC reduces the cost of single-node repair by 14%-61% and the repair time by 14%-58% at the expense of a small amount of extra storage overhead.
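The abstract does not give RGRC's exact layout, so the toy sketch below only illustrates the general principle it builds on, that grouped (local) parity lowers single-node repair reads, using plain XOR parity rather than RGRC's rotation encoding.
```python
# Toy illustration (not RGRC itself): with one global RS-style stripe of k
# blocks, repairing one lost block reads k blocks; with local XOR parity
# per group of g blocks, it reads only g blocks.
import functools, operator

def xor_parity(blocks):
    return functools.reduce(operator.xor, blocks)

k, g = 12, 4
data = list(range(100, 100 + k))                 # 12 data "blocks" (toy ints)
groups = [data[i:i + g] for i in range(0, k, g)] # 3 local groups of 4
local_parity = [xor_parity(grp) for grp in groups]

lost_index = 6                                   # single-node failure
grp, pos = divmod(lost_index, g)
survivors = [b for j, b in enumerate(groups[grp]) if j != pos]
repaired = xor_parity(survivors + [local_parity[grp]])
assert repaired == data[lost_index]
print(f"read {len(survivors) + 1} blocks instead of {k}")  # 4 vs 12
```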
Directed fuzzing method for binary programs
ZHANG Hanfang, ZHOU Anmin, JIA Peng, LIU Luping, LIU Liang
Journal of Computer Applications    2019, 39 (5): 1389-1393.   DOI: 10.11772/j.issn.1001-9081.2018102194
To address the blindness of mutation in current fuzzing, where most mutated samples follow the same high-frequency paths, a binary fuzzing method based on lightweight program analysis was proposed and implemented. Firstly, the target binary was statically analyzed to identify the comparison instructions that prevent sample files from penetrating deep into the program during fuzzing. Secondly, the target binary was instrumented to obtain the concrete operand values of these comparison instructions; from these values, real-time comparison progress information was maintained for each comparison instruction and used to measure the importance of each sample. Thirdly, real-time path coverage information gathered during fuzzing was used to increase the probability that samples exercising rare paths were selected for mutation. Finally, input files were mutated in a directed way, combining the comparison progress information with a heuristic strategy, to improve the efficiency of generating valid inputs that can bypass the comparison checks in the program. Experimental results show that the proposed method outperforms the binary fuzzing tool AFL-Dyninst in both finding crashes and discovering new paths.
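A hedged Python sketch of the comparison-progress idea: measure how close the two operands of an instrumented comparison are, and give more mutation energy to seeds that are near to flipping an unsolved comparison. The concrete metric (matched prefix bytes) and the weighting are assumptions; the paper's definitions may differ.
```python
def cmp_progress(lhs: bytes, rhs: bytes) -> float:
    """Fraction of leading bytes that already match (1.0 == comparison solved)."""
    if not lhs and not rhs:
        return 1.0
    matched = 0
    for a, b in zip(lhs, rhs):
        if a != b:
            break
        matched += 1
    return matched / max(len(lhs), len(rhs))

def seed_energy(progress_per_cmp, base=1.0):
    """More energy for seeds that are close to flipping unsolved comparisons."""
    unsolved = [p for p in progress_per_cmp if p < 1.0]
    if not unsolved:
        return base
    return base * (1.0 + max(unsolved))  # prioritize the nearest-to-solved cmp

# e.g. a seed whose input has matched 3 of 4 magic bytes gets boosted:
print(seed_energy([cmp_progress(b"FUZ0", b"FUZZ"), 1.0]))  # 1.75
```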
Spatio-temporal query algorithm based on Hilbert-R tree hierarchical index
HOU Haiyao, QIAN Yurong, YING Changtian, ZHANG Han, LU Xueyuan, ZHAO Yi
Journal of Computer Applications    2018, 38 (10): 2869-2874.   DOI: 10.11772/j.issn.1001-9081.2018040749
Aiming at the multi-path query problem of tree spatial indexes and their lack of a temporal index, a Hilbert-R tree index construction scheme combining time and clustering results was proposed. Firstly, according to the periodicity of data collection, the spatio-temporal dataset was partitioned and a time index was established on this basis; the spatial data was partitioned and encoded by the Hilbert curve, mapping spatial coordinates to one-dimensional intervals. Secondly, according to the distribution of feature objects in space, a clustering algorithm that determines the value of K dynamically was adopted to build an efficient Hilbert-R tree spatial index. Finally, a hierarchical indexing mechanism over time attributes and clustering results was built on several common Redis key-value data structures. Compared with the Cache Conscious R+-tree (CCR+), the proposed algorithm effectively reduces time overhead, shortening query time by about 25% on average in experiments on spatio-temporal range queries and target vector object queries. It adapts well to data of different densities and better supports Redis for massive spatio-temporal data queries.
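The Hilbert encoding step is standard; a minimal Python version of the classic 2-D-to-1-D mapping is sketched below. The grid order (curve resolution) is an assumed parameter, and real coordinates would first be quantized onto the 2^order x 2^order grid.
```python
def hilbert_index(order, x, y):
    """Map a 2-D cell (x, y) on a 2^order x 2^order grid to its 1-D Hilbert index."""
    d = 0
    s = 1 << (order - 1)
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate the quadrant so the curve stays contiguous
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d

# grid cells -> 1-D keys that can serve as Redis sorted-set scores / R-tree keys
keys = [hilbert_index(4, x, y) for x, y in [(5, 7), (5, 8), (6, 7)]]
```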
Virtual machine anomaly detection algorithm based on detection region dividing
WU Tianshu, CHEN Shuyu, ZHANG Hancui, ZHOU Zhen
Journal of Computer Applications    2016, 36 (4): 1066-1069.   DOI: 10.11772/j.issn.1001-9081.2016.04.1066
The stable operation of virtual machines is an important support for cloud services. Because of the tremendous number of virtual machines and their changing status, it is hard for a management system to train a classifier for each virtual machine individually. To improve real-time performance and detection ability, a new mechanism for dividing the virtual machine detection region, based on a modified k-medoids clustering algorithm, was proposed; the iterative process of clustering was optimized to speed up the division of detection regions, and the efficiency and accuracy of anomaly detection were consequently enhanced by this detection region strategy. Experiments and analysis show that the modified clustering algorithm has lower time complexity, and that the detection method with detection region dividing outperforms the original algorithm in both efficiency and accuracy.
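A minimal PAM-style k-medoids sketch of the kind of region dividing described, grouping virtual machines by status-metric similarity; the paper's specific optimization of the iterative process is not reproduced here.
```python
import random

def k_medoids(points, k, dist, iters=20, seed=0):
    rng = random.Random(seed)
    medoids = rng.sample(range(len(points)), k)

    def assign(meds):
        # each point joins the detection region of its nearest medoid
        clusters = {m: [] for m in meds}
        for i, p in enumerate(points):
            clusters[min(meds, key=lambda m: dist(p, points[m]))].append(i)
        return clusters

    for _ in range(iters):
        clusters = assign(medoids)
        # new medoid: the member minimizing total in-cluster distance
        new = [min(ms, key=lambda c: sum(dist(points[c], points[j]) for j in ms))
               for ms in clusters.values() if ms]
        if set(new) == set(medoids):
            break
        medoids = new
    return medoids, assign(medoids)

# toy VM status vectors (e.g. CPU, memory), Manhattan distance
vms = [(0, 0), (0, 1), (5, 5), (6, 5), (9, 9)]
meds, regions = k_medoids(vms, 2, lambda a, b: abs(a[0]-b[0]) + abs(a[1]-b[1]))
```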
Adaptive variable step-size blind source separation algorithm based on nonlinear principal component analysis
GU Fanglin, ZHANG Hang, LI Lunhui
Journal of Computer Applications    2013, 33 (05): 1233-1236.   DOI: 10.3724/SP.J.1087.2013.01233
The design of the step-size is crucial to the convergence rate of the Nonlinear Principal Component Analysis (NPCA) algorithm, yet the commonly used fixed step-size can hardly satisfy convergence speed and estimation precision requirements simultaneously. To address this issue, a gradient-based adaptive step-size NPCA algorithm and an optimal step-size NPCA algorithm were proposed to speed up convergence and improve tracking ability. In particular, the optimal step-size NPCA algorithm linearly approximated the contrast function and computed the current optimal step-size; its step-size was adjusted adaptively in step with the value of the contrast function, without any manually tuned parameters. Simulation results show that, at the same estimation precision, the proposed adaptive step-size NPCA algorithms converge faster or track better than the fixed step-size NPCA algorithm, and that the convergence performance of the optimal step-size NPCA algorithm is superior to that of the gradient-based adaptive algorithm.
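A hedged NumPy sketch of the idea: the standard NPCA stochastic update with a step-size that grows while successive gradients stay correlated and shrinks when they oscillate. This is one common gradient-based adaptation rule, not necessarily the exact rule derived in the paper, and all constants are illustrative.
```python
import numpy as np

def npca_separate(X, mu0=0.01, rho=1e-4, mu_min=1e-4, mu_max=0.1, seed=0):
    """Separate n mixed sources with W <- W + mu * (x - W g(y)) g(y)^T, y = W^T x."""
    n, T = X.shape
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n, n)) * 0.1
    mu, prev_grad = mu0, np.zeros((n, n))
    for t in range(T):
        x = X[:, t:t+1]
        y = W.T @ x
        g = np.tanh(y)                       # nonlinearity g(.)
        grad = (x - W @ g) @ g.T             # stochastic gradient of the contrast
        # adapt step-size by correlation of successive gradients
        mu = float(np.clip(mu + rho * np.sum(grad * prev_grad), mu_min, mu_max))
        W += mu * grad
        prev_grad = grad
    return W

# toy test: unmix a square wave and a sinusoid mixed by a fixed matrix
t = np.linspace(0, 100, 5000)
S = np.vstack([np.sign(np.sin(3 * t)), np.sin(7 * t)])
A = np.array([[1.0, 0.6], [0.5, 1.0]])
W = npca_separate(A @ S)
```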
Reliable peer exchange mechanism based on semi-distributed peer-to-peer system
ZHANG Han, ZHANG Jianbiao, LIN Li
Journal of Computer Applications    2013, 33 (01): 4-7.   DOI: 10.3724/SP.J.1087.2013.00004
The Peer Exchange (PEX) technique, widely used in Peer-to-Peer (P2P) systems, brings more peers but also a security flaw: a malicious peer can pollute a normal peer's neighbor table by exploiting peer exchange. This paper first analyzed the flaw and discussed its main causes. Then, based on this analysis, a reliable peer exchange mechanism for semi-distributed P2P systems was proposed. It introduced an approach to estimating a super node's trust value based on an incentive mechanism, together with the concept of a peer's source trust value, which is the foundation of the proposed mechanism; by using source trust values, the goal of controlling peer exchange was achieved. The experimental results show that, under the trust value miscalculation caused by network heterogeneity, only 2.5% of good peers are denied exchange, while pollution of good peers' neighbor tables and passive infection from good peers are significantly reduced by the proposed mechanism, thus guaranteeing system reliability.
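A hedged sketch of trust-gated peer exchange on a super node: each peer carries a source trust value updated from behavior reports, and only peers above a threshold are handed out. The update formula, default trust and threshold below are illustrative assumptions, not the paper's formulas.
```python
class SuperNode:
    def __init__(self, threshold=0.5):
        self.trust = {}            # peer_id -> source trust in [0, 1]
        self.threshold = threshold

    def report(self, peer_id, good: bool, weight=0.1):
        """Incentive-style update: raise trust on good behavior, cut it on bad."""
        t = self.trust.get(peer_id, 0.5)
        t = t + weight * (1 - t) if good else t * (1 - 2 * weight)
        self.trust[peer_id] = min(max(t, 0.0), 1.0)

    def exchange(self, requester_id, known_peers):
        """Answer a PEX request with trusted peers only."""
        if self.trust.get(requester_id, 0.5) < self.threshold:
            return []              # low-trust requesters get nothing
        return [p for p in known_peers
                if self.trust.get(p, 0.5) >= self.threshold and p != requester_id]

sn = SuperNode()
for _ in range(3):
    sn.report("evil", good=False)  # repeated bad reports sink its trust
print(sn.exchange("alice", ["bob", "evil"]))   # ['bob']
```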
Improved approach of hybrid formation for multi-mobile robots
ZHANG Han-dong, HUANG Li, CEN Yu-wan
Journal of Computer Applications    2012, 32 (07): 1955-1957.   DOI: 10.3724/SP.J.1087.2012.01955
Concerning the parameter-selection problems that arise in complex environments when two multi-robot formation methods, the Leader-Follower method and a behavior-based method, are mixed, this paper improved both methods and optimized five kinds of behavior parameters online with the Particle Swarm Optimization (PSO) algorithm to improve multi-robot formation. The simulation results validate that the proposed algorithm is feasible and achieves the expected optimization effect.
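PSO is the optimizer named in the abstract; a minimal sketch for tuning a five-dimensional behavior-parameter vector follows, with a placeholder fitness function standing in for the simulated formation error.
```python
import random

def pso(fitness, dim=5, swarm=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(0, 1) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_f = [fitness(p) for p in pos]
    gbest = min(range(swarm), key=lambda i: pbest_f[i])  # global best index
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (pbest[gbest][d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < pbest_f[gbest]:
                    gbest = i
    return pbest[gbest], pbest_f[gbest]

# placeholder fitness: distance of the five weights from an assumed "ideal" setting
best, err = pso(lambda p: sum((x - 0.3) ** 2 for x in p))
```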
Quality evaluation of halftone based on visual perception similarity
ZHANG Han-bing
Journal of Computer Applications    2011, 31 (10): 2750-2752.   DOI: 10.3724/SP.J.1087.2011.02750
To measure the quality of halftones and halftoning algorithms, this paper proposed the Mean Luminance Similarity Error (MLSE), Mean Contrast Similarity Error (MCSE) and Visual Perception Similarity Error (VPSE) to measure the visual perception similarity between halftones and their original continuous-tone images. In the proposed method, an input image was divided into blocks according to the local adaptation of human eyes, and the luminance similarity error and contrast similarity error of each block were computed according to luminance adaptation and contrast masking in visual perception. Finally, MLSE, MCSE and VPSE were computed to evaluate luminance similarity, texture similarity and local unexpected artifacts respectively. The experimental results show that the proposed quality measures agree better with the Human Visual System (HVS) than the Peak Signal-to-Noise Ratio (PSNR) and Weighted Signal-to-Noise Ratio (WSNR) in terms of luminance similarity, and better than the Universal Quality Index (UQI) and Structural SIMilarity Index Measure (SSIM) in terms of texture similarity.
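A hedged NumPy sketch of block-wise luminance and contrast similarity errors, using SSIM-style comparison terms as stand-ins for the paper's exact definitions; the block size and stabilizing constants are assumptions.
```python
import numpy as np

def block_similarity_errors(orig, half, block=16, c1=6.5, c2=58.5):
    """Mean per-block luminance and contrast similarity errors (0 == identical)."""
    lum_err, con_err = [], []
    for i in range(0, orig.shape[0] - block + 1, block):
        for j in range(0, orig.shape[1] - block + 1, block):
            a = orig[i:i+block, j:j+block].astype(float)
            b = half[i:i+block, j:j+block].astype(float)
            mu_a, mu_b = a.mean(), b.mean()
            sd_a, sd_b = a.std(), b.std()
            lum = (2 * mu_a * mu_b + c1) / (mu_a**2 + mu_b**2 + c1)
            con = (2 * sd_a * sd_b + c2) / (sd_a**2 + sd_b**2 + c2)
            lum_err.append(1 - lum)
            con_err.append(1 - con)
    return np.mean(lum_err), np.mean(con_err)  # MLSE- and MCSE-like scores

rng = np.random.default_rng(0)
gray = rng.uniform(0, 255, (64, 64))
halftone = (gray > 127) * 255.0                # crude threshold halftone
print(block_similarity_errors(gray, halftone))
```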
Low-power NoC router design
ZHOU Duan, PENG Jing, ZHANG Jian-xian, ZHANG Han
Journal of Computer Applications    2011, 31 (10): 2621-2624.   DOI: 10.3724/SP.J.1087.2011.02621
Concerning the power consumption of Network-on-Chip (NoC) routers, the key factors that influence router power consumption, such as the number of virtual channels, the depth of the buffers and the width of the flits, were studied at the system level. A method of reducing power consumption was presented that combines these key factors and makes the virtual channels share the switch input port, and a low-power NoC router was then implemented. The experimental results show that the designed router consumes less power than the Alpha 21364 router and the IBM InfiniBand router.